Replace uses of fmt.Sprintf and fmt.Errorf with better alternatives #43413
Static quality checks
✅ Please find below the results from static quality gates. All checks successful.
Regression Detector Results
Baseline: e558d85
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ❌ | docker_containers_cpu | % cpu utilization | +5.08 | [+2.07, +8.10] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ❌ | docker_containers_cpu | % cpu utilization | +5.08 | [+2.07, +8.10] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | +0.88 | [+0.66, +1.10] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_metrics | memory utilization | +0.37 | [+0.22, +0.51] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | +0.32 | [+0.23, +0.42] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.13 | [+0.05, +0.20] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.11 | [-0.29, +0.51] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.09 | [+0.04, +0.14] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.07 | [-0.07, +0.22] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.02 | [-0.11, +0.15] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | +0.01 | [-0.13, +0.14] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.07, +0.07] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.01 | [-0.42, +0.41] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.01 | [-0.06, +0.04] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.02 | [-0.39, +0.36] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.04 | [-0.09, +0.01] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.06 | [-0.11, -0.01] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | -0.12 | [-0.33, +0.10] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.12 | [-0.17, -0.07] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_logs | memory utilization | -0.25 | [-0.33, -0.18] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.47 | [-0.70, -0.23] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | -0.77 | [-0.85, -0.68] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | -0.83 | [-1.03, -0.63] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -1.89 | [-3.32, -0.45] | 1 | Logs bounds checks dashboard |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide that a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
@codex review
Codex Review: Didn't find any major issues. Bravo.
/trigger-ci
Started pipeline #84318398
pgimalac left a comment:
LGTM for Agent Runtimes and e2e testing
Should the linter be disabled on test files?
Agreed, readability is decreased with string concatenation compared to fmt.Sprintf, and performance should not really be a concern in test files/assertions?
Same question for the test package.
lavigne958 left a comment:
Due to the large number of changes in this pull request, only one file is being shown at a time.
Apparently Github has recently learnt something, probably due to: #43238 😆
LGTM!
You need to switch back to the "old experience" to view all files 🙃
anais-raison left a comment:
LGTM for @DataDog/agent-apm !
adel121 left a comment:
LGTM for @DataDog/container-platform
Micro-benchmark comparing fmt.Sprintf, string concatenation, and strings.Builder:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"testing"
)

var (
	key      = os.Args[0]
	value    = os.Args[0]
	expected = key + ":" + value
)

func BenchmarkSprintf(b *testing.B) {
	var tag string
	for b.Loop() {
		tag = fmt.Sprintf("%s:%s", key, value)
	}
	if tag != expected {
		b.Fatal()
	}
}

func BenchmarkConcat(b *testing.B) {
	var tag string
	for b.Loop() {
		tag = key + ":" + value
	}
	if tag != expected {
		b.Fatal()
	}
}

func BenchmarkStringBuilder(b *testing.B) {
	var tag strings.Builder
	for b.Loop() {
		tag.Reset()
		tag.Grow(len(key) + 1 + len(value))
		tag.WriteString(key)
		tag.WriteString(":")
		tag.WriteString(value)
	}
	if tag.String() != expected {
		b.Fatal()
	}
}
```

```
% go test -bench=. -count=10 | benchstat -
goos: darwin
goarch: arm64
cpu: Apple M4 Max
                  │      -      │
                  │   sec/op    │
Sprintf-16          66.27n ± 3%
Concat-16           26.36n ± 2%
StringBuilder-16    28.79n ± 6%
geomean             36.91n
```

Internally, concatenation uses a runtime function:

```
% go test -c
% go tool objdump -s BenchmarkConcat benchconcat.test | grep CALL
  main_test.go:32   0x10010c850  97fd3360  CALL runtime.concatstring3(SB)
  benchmark.go:514  0x10010c884  97feed0b  CALL testing.(*B).loopSlowPath(SB)
  main_test.go:35   0x10010c8bc  97fbd565  CALL runtime.memequal(SB)
  main_test.go:36   0x10010c8d4  97ff12a3  CALL testing.(*common).Fatal(SB)
  main_test.go:28   0x10010c8ec  97fda7ed  CALL runtime.morestack_noctxt.abi0(SB)
% go tool objdump -s BenchmarkStringBuilder benchconcat.test | grep CALL
  main_test.go:44   0x10010c92c  97feb891  CALL strings.(*Builder).Reset(SB)
  main_test.go:45   0x10010c94c  97feb8d1  CALL strings.(*Builder).Grow(SB)
  main_test.go:46   0x10010c960  97feb998  CALL strings.(*Builder).WriteString(SB)
  main_test.go:47   0x10010c974  97feb993  CALL strings.(*Builder).WriteString(SB)
  main_test.go:48   0x10010c988  97feb98e  CALL strings.(*Builder).WriteString(SB)
  benchmark.go:514  0x10010c9b0  97feecc0  CALL testing.(*B).loopSlowPath(SB)
  main_test.go:51   0x10010c9ec  97fbd519  CALL runtime.memequal(SB)
  main_test.go:52   0x10010ca04  97ff1257  CALL testing.(*common).Fatal(SB)
  builder.go:47     0x10010ca18  97fd7eae  CALL runtime.panicunsafestringlen(SB)
  builder.go:47     0x10010ca1c  97fd7ebd  CALL runtime.panicunsafestringnilptr(SB)
  main_test.go:40   0x10010ca2c  97fda79d  CALL runtime.morestack_noctxt.abi0(SB)
```

See https://go.dev/src/runtime/string.go and https://go.dev/src/strings/builder.go.
What does this PR do?
Enable the perfsprint linter, and address all the changes it suggested.
Motivation
Inefficient string manipulations are currently common.
But when they happen in a hot path, the performance impact can be measurable as demonstrated in #43407.
For better code consistency, let’s apply the efficient patterns everywhere.
Describe how you validated your changes
No functional or behavioral change is expected. The new code should behave exactly like the old one.
The changes are very mechanical: they have all been generated by tools like perfsprint and goimports. So, we can rely on the existing unit and e2e tests run by the CI.
Additional Notes
Micro-benchmark: see the benchmark posted in the conversation above.
Real-case benchmark: see #43407.
Asked by the community: #38136.
Profiles comparison between the last nightly version that didn't have this PR and the first nightly version that has it.